💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on unlearning in cutting-edge deep generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle unlearning tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: The Illusion of Unlearning: The Unstable Nature of Machine Unlearning in Text-to-Image Diffusion Models


🔸 Presenter: Aryan Komaei

🌀 Abstract:
This paper tackles a critical issue in text-to-image diffusion models such as Stable Diffusion, DALL·E, and Midjourney. These models are trained on massive datasets that often contain private or copyrighted content, raising serious legal and ethical concerns. To address this, machine unlearning methods have emerged that aim to remove specific concepts or information from a trained model. However, the paper reveals a major flaw: supposedly unlearned concepts can resurface once the model is fine-tuned again. The authors introduce a framework to analyze and evaluate the stability of current unlearning techniques and offer insights into why they often fail, paving the way for more robust future methods.
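
💻 Illustrative sketch (not from the paper):
If you want a feel for the instability being discussed, the snippet below shows one rough way to probe whether an "erased" concept resurfaces after further fine-tuning. It is a minimal sketch under stated assumptions, not the authors' evaluation framework: it assumes the Hugging Face diffusers and transformers libraries, the checkpoint paths ./sd-unlearned and ./sd-unlearned-finetuned and the example concept are hypothetical placeholders, and a CLIP text-image score is used only as a crude proxy for concept presence.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

# Hypothetical example of a concept that was supposedly erased.
ERASED_CONCEPT = "a painting in the style of Van Gogh"

# CLIP model used as a rough detector for the erased concept.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_score(pipe, prompt, n=4):
    """Mean CLIP similarity between n generated images and the concept text."""
    images = pipe(prompt, num_inference_steps=30, num_images_per_prompt=n).images
    inputs = clip_proc(text=[prompt], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = clip(**inputs).logits_per_image  # shape (n, 1)
    return sims.mean().item()

# 1) Model right after unlearning (hypothetical local checkpoint).
unlearned = StableDiffusionPipeline.from_pretrained("./sd-unlearned").to("cuda")
score_before = concept_score(unlearned, ERASED_CONCEPT)

# 2) The same model after benign fine-tuning on unrelated data
#    (hypothetical checkpoint produced by, e.g., a short LoRA run).
finetuned = StableDiffusionPipeline.from_pretrained("./sd-unlearned-finetuned").to("cuda")
score_after = concept_score(finetuned, ERASED_CONCEPT)

print(f"concept score before fine-tuning: {score_before:.2f}")
print(f"concept score after  fine-tuning: {score_after:.2f}")
```

A noticeably higher score after the benign fine-tuning run would suggest the concept has resurfaced, i.e., the unlearning did not hold.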

Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 11:00 AM - 12:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️



By RIML Lab

